
International Journal on Consulting Psychology for Patients

Volume 2, No. 2, 2018, pp 1-6
http://dx.doi.org/10.21742/ijcpp.2018.2.2.01


Evaluation of Automatic Scoring in Clinical Performance Oral Examination



    Bee-sung Kam
    Pusan National University School of Medicine
    beesung@pnu.ac.kr

    Abstract

    Manual scoring of clinical performance oral examinations is a labor-intensive task and introduces variance between scores. Automating the assessment can improve the reproducibility of examinations and reduce the costs associated with manual scoring. The purpose of this study was to compare an automatic scoring method with manual scoring of clinical performance examinations. Forty-two files of existing student data were selected at Pusan National University School of Medicine (PNUSOM) and distributed into three sets: schema comprehension score, overall performance satisfaction, and written assessment automated score. The variable examined was the hierarchical relations of the dizziness class within the clinical performance category. Results were obtained for the 42 data pairs in two stages: the first compared the manual score with the automatic score, and the second compared the automatic score with manual performance satisfaction. The reliability of the automated scoring was 0.38. A reasonable performance benchmark can be established to further redistribute examinations toward self-assessment.
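    The abstract does not specify which reliability statistic yielded the value of 0.38. As a purely illustrative sketch, the Python snippet below shows one common way such agreement between manual and automatic scores could be estimated for 42 paired examinations, using a Pearson correlation on placeholder data; the variable names and values are assumptions, not the authors' method.

    # Hypothetical sketch: estimating agreement between manual and automatic
    # scores for 42 paired examinations. The score values are made up; the
    # Pearson correlation is only one possible choice of reliability statistic.
    import numpy as np

    rng = np.random.default_rng(0)

    # Placeholder data: one manual and one automatic score per student (n = 42).
    manual_scores = rng.uniform(40, 100, size=42)
    automatic_scores = manual_scores + rng.normal(0, 15, size=42)  # noisy proxy

    # Pearson correlation as a simple pairwise reliability estimate.
    r = np.corrcoef(manual_scores, automatic_scores)[0, 1]
    print(f"Reliability (Pearson r) between manual and automatic scores: {r:.2f}")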


 
